In this project, you'll use generative adversarial networks (GANs) to generate new images of faces.
You'll be using two datasets in this project:
- MNIST
- CelebA
Since the CelebA dataset is complex and this is your first GAN project, we want you to test your neural network on MNIST before moving on to CelebA. Running the GAN on MNIST will let you see how well your model trains much sooner.
If you're using FloydHub, set data_dir to "/input" and use the FloydHub data ID "R5KrjnANiKVhLWAkpXhNBe".
data_dir = './data'
# FloydHub - Use with data ID "R5KrjnANiKVhLWAkpXhNBe"
#data_dir = '/input'
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
helper.download_extract('mnist', data_dir)
helper.download_extract('celeba', data_dir)
show_n_images = 9
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
%matplotlib inline
import os
from glob import glob
from matplotlib import pyplot
mnist_images = helper.get_batch(glob(os.path.join(data_dir, 'mnist/*.jpg'))[:show_n_images], 28, 28, 'L')
pyplot.imshow(helper.images_square_grid(mnist_images, 'L'), cmap='gray')
The CelebFaces Attributes Dataset (CelebA) contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations. You can view the first few examples by changing show_n_images.
show_n_images = 16
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
celeba_images = helper.get_batch(glob(os.path.join(data_dir, 'img_align_celeba/*.jpg'))[:show_n_images], 28, 28, 'RGB')
pyplot.imshow(helper.images_square_grid(celeba_images, 'RGB'))
Since the project's main focus is building the GAN, we'll preprocess the data for you. Both the MNIST and CelebA images will be 28x28, with pixel values scaled to the range -0.5 to 0.5. The CelebA images are cropped to remove the parts of the image that don't include a face, then resized down to 28x28.
The MNIST images are grayscale with a single color channel, while the CelebA images have three color channels (RGB).
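Just to make the scaling concrete, here's a rough sketch of the kind of rescaling the helper performs (an assumption for illustration - the actual helper code may differ): 8-bit pixel values are divided by their maximum and shifted down by 0.5.
import numpy as np
def scale_to_half_range(batch, image_max_value=255):
    # hypothetical re-implementation of the helper's scaling step:
    # map uint8 pixels in [0, 255] to floats in [-0.5, 0.5]
    return batch.astype(np.float32) / image_max_value - 0.5
example = np.array([[0, 127, 255]], dtype=np.uint8)
print(scale_to_half_range(example))  # [[-0.5, -0.00196..., 0.5]]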
You'll build the components necessary to build a GAN by implementing the following functions:
- model_inputs
- discriminator
- generator
- model_loss
- model_opt
- train
This will check to make sure you have the correct version of TensorFlow and access to a GPU.
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer. You are using {}'.format(tf.__version__)
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
Implement the model_inputs function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Real input images placeholder using image_width, image_height, and image_channels.
- Z input placeholder using z_dim.
- Learning rate placeholder.
Return the placeholders in the following tuple: (tensor of real input images, tensor of z data, learning rate).
import problem_unittests as tests
def model_inputs(image_width, image_height, image_channels, z_dim):
"""
Create the model inputs
:param image_width: The input image width
:param image_height: The input image height
:param image_channels: The number of image channels
:param z_dim: The dimension of Z
:return: Tuple of (tensor of real input images, tensor of z data, learning rate)
"""
# TODO: Implement Function
    # placeholder for the real input images. The first dimension is None, meaning a batch
    # can contain any number of examples
inputs_real = tf.placeholder(tf.float32, (None, image_width, image_height, image_channels),
name='inputs_real')
# create the placeholder for the z inputs. These will be used by the generator to generate images
inputs_z = tf.placeholder(tf.float32, (None, z_dim), name="inputs_z")
    # scalar placeholder for the learning rate
    learn_rate = tf.placeholder(tf.float32, None, name="learning_rate")
return inputs_real, inputs_z, learn_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_inputs(model_inputs)
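As a quick sanity check (an illustrative snippet, not part of the project template), you can build the placeholders in a throwaway graph and confirm their shapes; the sizes 28x28x3 and z_dim=100 are assumptions for the example.
with tf.Graph().as_default():
    check_real, check_z, check_lr = model_inputs(28, 28, 3, 100)
    print(check_real.shape)  # (?, 28, 28, 3)
    print(check_z.shape)     # (?, 100)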
Implement discriminator to create a discriminator neural network that discriminates on images. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "discriminator" to allow the variables to be reused. The function should return a tuple of (tensor output of the discriminator, tensor logits of the discriminator).
def discriminator(images, reuse=False):
"""
Create the discriminator network
:param images: Tensor of input image(s)
:param reuse: Boolean if the weights should be reused
:return: Tuple of (tensor output of the discriminator, tensor logits of the discriminator)
"""
# TODO: Implement Function
    # alpha for the leaky ReLUs - same value as the Udacity DCGAN walkthrough
    alpha = 0.2
    # reviewer comment - use dropout in the discriminator with a low drop rate; apply it
    # after each conv layer except the last, and never after the final dense layer
    # reviewer comment - use Xavier initialisation; here it is applied to the first conv layer
    # note - experimented with the ordering of batch norm / leaky ReLU / dropout, as per
    # https://stackoverflow.com/questions/39691902/ordering-of-batch-normalization-and-dropout-in-tensorflow
    keep_prob = 0.95  # keep probability for dropout (i.e. drop 5% of activations)
with tf.variable_scope("discriminator", reuse=reuse):
# input layer is 28*28*3 for the celebs, 28*28*1 for the MNIST
x1 = tf.layers.conv2d(images, 64, 5, strides=2, padding='same',
kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
# don't apply the batch normalisation to the input layer
        x1 = tf.maximum(alpha * x1, x1)  # leaky ReLU
        x1 = tf.nn.dropout(x1, keep_prob=keep_prob)  # dropout
        # layer is now 14 * 14 * 64 (a stack of 14*14 feature maps across 64 filters)
x2 = tf.layers.conv2d(x1, 128, 5, strides=2, padding='same')
x2 = tf.layers.batch_normalization(x2, training=True) # batch normalization layer
x2 = tf.maximum(alpha * x2, x2) # leaky relu layer
        x2 = tf.nn.dropout(x2, keep_prob=keep_prob)  # dropout
# print('x2.shape is {}'.format(x2.shape))
# layer is now 7 * 7 * 128
x3 = tf.layers.conv2d(x2, 256, 5, strides=2, padding='same') # convolution layer
x3 = tf.layers.batch_normalization(x3, training=True) # batch normalization layer
x3 = tf.maximum(alpha * x3, x3) # leaky relu layer
        # no dropout at this layer, following the reviewer comment -
        # don't use it on the last conv layer
        # layer is now 4 * 4 * 256 (7 -> 4 with stride 2 and 'same' padding)
        # flatten the final tensor
        flat = tf.reshape(x3, (-1, 4*4*256))
        logits = tf.layers.dense(flat, 1)  # a single logit - a yes/no as to whether the image is real
out = tf.sigmoid(logits)
return out, logits
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(discriminator, tf)
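Another scratch-graph check (illustrative only, not part of the template): feed a placeholder through discriminator and confirm the output and logit tensors are both shaped (?, 1); the 3-channel input is an assumption for the example.
with tf.Graph().as_default():
    check_images = tf.placeholder(tf.float32, (None, 28, 28, 3))
    check_out, check_logits = discriminator(check_images)
    print(check_out.shape, check_logits.shape)  # (?, 1) (?, 1)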
Implement generator to generate an image using z. This function should be able to reuse the variables in the neural network. Use tf.variable_scope with a scope name of "generator" to allow the variables to be reused. The function should return the generated 28 x 28 x out_channel_dim images.
def generator(z, out_channel_dim, is_train=True):
"""
Create the generator network
:param z: Input z
:param out_channel_dim: The number of channels in the output image
:param is_train: Boolean if generator is being used for training
:return: The tensor output of the generator
"""
# TODO: Implement Function
    # alpha for the leaky ReLUs - same value as the Udacity DCGAN walkthrough
    alpha = 0.2
    # reviewer comment - use Xavier initialisation for the weights; it is applied to each
    # transposed convolution below
    # reviewer #3 comment - DON'T store a single initialised weight tensor and apply it to
    # every layer, as that would reuse the same weights and lose the stochastic behaviour
    # reviewer #3 comment - use dropout as in the discriminator
    keep_prob = 0.95  # keep probability for dropout (i.e. drop 5% of activations)
with tf.variable_scope("generator", reuse=not is_train):
        # this threw me for ages - you set reuse to True when is_train is False and
        # vice versa: while training you create the variables rather than reuse them
        # reviewer suggestion - use 3 layers (previously 2): 512 -> 256 -> 128 -> output
        # create the first, fully connected layer
x1_dense = tf.layers.dense(z, 7*7*512) # match the discriminator - work backwards
# reshape it into a convolutional tensor and start the convolutional stack.
x1 = tf.reshape(x1_dense, (-1, 7, 7, 512))
        x1 = tf.layers.batch_normalization(x1, training=is_train)  # batch norm
        x1 = tf.maximum(alpha * x1, x1)  # leaky ReLU
        x1 = tf.nn.dropout(x1, keep_prob=keep_prob)  # dropout
# print("x1.shape is {}".format(x1.shape))
# layer is 7 * 7 * 512
x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same',
kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
x2 = tf.layers.batch_normalization(x2, training=is_train) # batch norm process
x2 = tf.maximum(alpha * x2, x2) # leaky relu process
        x2 = tf.nn.dropout(x2, keep_prob=keep_prob)  # dropout
#print("x2.shape is {}".format(x2.shape))
# layer is now 14 * 14 * 256
x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same',
kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
x3 = tf.layers.batch_normalization(x3, training=is_train) # batch norm process
x3 = tf.maximum(alpha * x3, x3) # leaky relu process
        x3 = tf.nn.dropout(x3, keep_prob=keep_prob)  # dropout
# print("x3.shape is {}".format(x3.shape))
# layer is now 28 * 28 * 128
# create the output layer
logits = tf.layers.conv2d_transpose(x3, out_channel_dim, 5, strides=1, padding='same',
kernel_initializer=tf.contrib.layers.xavier_initializer_conv2d())
        # the number of filters here equals the number of output channels, which is how we
        # handle the channel difference between MNIST (1) and CelebA (3)
        # stride must be 1, as the image is already 28 * 28 and we don't want 56 * 56
# layer is now 28 * 28 * 3 or 28 * 28 * 1
#print('logits has shape: {}'.format(logits.shape))
out = tf.tanh(logits)
return out
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(generator, tf)
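As with the discriminator, a scratch-graph check (illustrative only) confirms the generator produces 28 x 28 images with the requested channel count; z_dim=100 is an assumption for the example.
for channels in (1, 3):  # MNIST vs. CelebA
    with tf.Graph().as_default():
        check_z = tf.placeholder(tf.float32, (None, 100))
        print(generator(check_z, channels).shape)  # (?, 28, 28, 1), then (?, 28, 28, 3)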
Implement model_loss to build the GAN for training and calculate the loss. The function should return a tuple of (discriminator loss, generator loss). Use the following functions you implemented:
- discriminator(images, reuse=False)
- generator(z, out_channel_dim, is_train=True)
def model_loss(input_real, input_z, out_channel_dim):
"""
Get the loss for the discriminator and generator
:param input_real: Images from the real dataset
:param input_z: Z input
:param out_channel_dim: The number of channels in the output image
:return: A tuple of (discriminator loss, generator loss)
"""
# TODO: Implement Function
    # reviewer #2 comment - try using smoothing on the discriminator labels to improve performance
    smooth = 0.15
g_model = generator(input_z, out_channel_dim)
d_model_real, d_logits_real = discriminator(input_real)
d_model_fake, d_logits_fake = discriminator(g_model, reuse=True)
    # reviewer comment - to stop the discriminator becoming too strong, only the labels for
    # real images are reduced, from 1 to 1 - smooth (0.85 here). This is known as one-sided
    # label smoothing; generation is the harder task, so this keeps the discriminator in check
    d_loss_real = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_real, labels=tf.ones_like(d_model_real) * (1 - smooth)))
    d_loss_fake = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
        logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))
d_loss = d_loss_real + d_loss_fake
# NOTE - only the discriminator labels are smoothed, NOT the generator labels (see above)
g_loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
logits=d_logits_fake, labels=tf.ones_like(d_model_fake))
)
return d_loss, g_loss
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_loss(model_loss)
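To see numerically what one-sided smoothing does to the loss, here's a small worked example in plain numpy (illustrative only - the logit value is made up):
import numpy as np
# sigmoid cross-entropy: -(y*log(p) + (1-y)*log(1-p))
logit = 3.0
p = 1.0 / (1.0 + np.exp(-logit))  # discriminator's confidence that a real image is real
cross_entropy = lambda y: -(y * np.log(p) + (1 - y) * np.log(1 - p))
print(cross_entropy(1.0))   # ~0.05 - a hard label lets the loss be driven towards 0
print(cross_entropy(0.85))  # ~0.50 - the smoothed label keeps a floor on the loss, penalising overconfidence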
Implement model_opt to create the optimization operations for the GANs. Use tf.trainable_variables to get all the trainable variables. Filter the variables with names that are in the discriminator and generator scope names. The function should return a tuple of (discriminator training operation, generator training operation).
def model_opt(d_loss, g_loss, learning_rate, beta1):
"""
Get optimization operations
:param d_loss: Discriminator loss Tensor
:param g_loss: Generator loss Tensor
:param learning_rate: Learning Rate Placeholder
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:return: A tuple of (discriminator training operation, generator training operation)
"""
# TODO: Implement Function
tvars = tf.trainable_variables()
d_vars = [var for var in tvars if var.name.startswith('discriminator')]
#print(d_vars)
g_vars = [var for var in tvars if var.name.startswith('generator')]
#print(g_vars)
    # wrap the optimisers in the UPDATE_OPS dependency so the batch-normalisation
    # moving averages are updated at each training step
    with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):
        d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)
        g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)
return d_train_opt, g_train_opt
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_model_opt(model_opt, tf)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
def show_generator_output(sess, n_images, input_z, out_channel_dim, image_mode):
"""
Show example output for the generator
:param sess: TensorFlow session
:param n_images: Number of Images to display
:param input_z: Input Z Tensor
:param out_channel_dim: The number of channels in the output image
:param image_mode: The mode to use for images ("RGB" or "L")
"""
cmap = None if image_mode == 'RGB' else 'gray'
z_dim = input_z.get_shape().as_list()[-1]
example_z = np.random.uniform(-1, 1, size=[n_images, z_dim])
samples = sess.run(
generator(input_z, out_channel_dim, False),
feed_dict={input_z: example_z})
images_grid = helper.images_square_grid(samples, image_mode)
pyplot.imshow(images_grid, cmap=cmap)
pyplot.show()
Implement train to build and train the GAN. Use the following functions you implemented:
- model_inputs(image_width, image_height, image_channels, z_dim)
- model_loss(input_real, input_z, out_channel_dim)
- model_opt(d_loss, g_loss, learning_rate, beta1)
Use show_generator_output to show generator output while you train. Running show_generator_output for every batch will drastically increase training time and the size of the notebook. It's recommended to print the generator output every 100 batches.
def train(epoch_count, batch_size, z_dim, learning_rate, beta1, get_batches, data_shape, data_image_mode):
"""
Train the GAN
:param epoch_count: Number of epochs
:param batch_size: Batch Size
:param z_dim: Z dimension
:param learning_rate: Learning Rate
:param beta1: The exponential decay rate for the 1st moment in the optimizer
:param get_batches: Function to get batches
:param data_shape: Shape of the data
:param data_image_mode: The image mode to use for images ("RGB" or "L")
"""
# TODO: Build Model
# saver = tf.train.Saver()
# create the inputs placeholders etc using the model_inputs function
# see playground cell below to see how I found out the dimensions
inputs_real, inputs_z, learn_rate = model_inputs(data_shape[1], data_shape[2], data_shape[3], z_dim)
# create the losses using the model_loss function
d_loss, g_loss = model_loss(inputs_real, inputs_z, data_shape[3])
# optimize those losses using the model_opt() function
d_opt, g_opt = model_opt(d_loss, g_loss, learning_rate=learning_rate, beta1=beta1)
losses = []
    # note - the losses are collected in a list but never returned, so the Udacity cells
    # below don't need changing; it's a reminder for future projects to store losses and
    # plot training loss vs. epochs (see the plotting sketch after this cell)
steps = 0
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(epoch_count):
for batch_images in get_batches(batch_size):
                steps += 1
# TODO: Train Model
                # from the Slack channel - get_batches yields data in the range [-0.5, 0.5],
                # so double it to [-1, 1] to match the tanh output of the generator
                batch_images = batch_images * 2
# create random noise z_inputs to seed the generator
batch_z = np.random.uniform(-1, 1, size=(batch_size, z_dim))
# run optimizers
_ = sess.run(d_opt, feed_dict={inputs_real: batch_images,
inputs_z: batch_z,
learn_rate: learning_rate
})
_ = sess.run(g_opt, feed_dict={inputs_real: batch_images,
inputs_z: batch_z,
learn_rate: learning_rate
})
                # report the training losses every 25 batches
if steps % 25 == 0:
train_loss_d = d_loss.eval(feed_dict={
inputs_real: batch_images,
inputs_z: batch_z
})
train_loss_g = g_loss.eval(feed_dict={
inputs_z: batch_z
})
print('Epoch: {}'.format(epoch_i + 1), end=", ")
print('Batch: {}'.format(steps), end=": ")
print('Discr train loss: {:.3f}'.format(train_loss_d), end=", ")
print('Gen train loss: {:.3f}'.format(train_loss_g))
# save losses to view after training:
losses.append((train_loss_d, train_loss_g))
                # show generated images every 100 batches, as per the instructions
                if steps % 100 == 0:
show_generator_output(sess, n_images=16, input_z=inputs_z, out_channel_dim=data_shape[3],
image_mode=data_image_mode)
#saver.save(sess, './checkpoints/generator.ckpt')
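As a sketch of how the collected losses could be plotted once train is modified to return them (hypothetical - as written, the function above does not return losses), assuming losses is a list of (d_loss, g_loss) tuples:
# hypothetical plotting cell; assumes train() is changed to `return losses`
losses_arr = np.array(losses)
pyplot.plot(losses_arr[:, 0], label='discriminator')
pyplot.plot(losses_arr[:, 1], label='generator')
pyplot.xlabel('checkpoint (every 25 batches)')
pyplot.ylabel('training loss')
pyplot.legend()
pyplot.show()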
# playground cell (not part of the template) - inspect the dataset attributes
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
print(mnist_dataset.shape)  # these dimensions are needed for the model inputs
Test your GAN architecture on MNIST. After two epochs, the GAN should be able to generate images that look like handwritten digits. Make sure the loss of the generator is lower than the loss of the discriminator or close to 0.
Following reviewer comments, I looked at the Radford et al. DCGAN paper for hyperparameter suggestions.
batch_size = 32
z_dim = 128
learning_rate = 0.0005
beta1 = 0.2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 2
mnist_dataset = helper.Dataset('mnist', glob(os.path.join(data_dir, 'mnist/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, mnist_dataset.get_batches,
mnist_dataset.shape, mnist_dataset.image_mode)
Run your GAN on CelebA. On an average GPU, one epoch takes around 20 minutes. You can run the whole epoch or stop once it starts to generate realistic faces.
batch_size = 32
z_dim = 128
learning_rate = 0.0005
beta1 = 0.2
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
epochs = 1
celeba_dataset = helper.Dataset('celeba', glob(os.path.join(data_dir, 'img_align_celeba/*.jpg')))
with tf.Graph().as_default():
train(epochs, batch_size, z_dim, learning_rate, beta1, celeba_dataset.get_batches,
celeba_dataset.shape, celeba_dataset.image_mode)
When submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as "dlnd_face_generation.ipynb" and save it as an HTML file under "File" -> "Download as". Include the "helper.py" and "problem_unittests.py" files in your submission.